Simulations of complementarity systems using time-stepping
One of the useful outcomes of the theory of complementarity systems is a new family of methods for the numerical simulation of discontinuous systems. Here we demonstrate the essence of one such method, time-stepping, by means of an example.
Example 1 (Simulation using time-stepping) Consider the following discontinuous dynamical system in \mathbb R^2:
\begin{aligned}
\dot x_1 &= -\operatorname{sign} x_1 + 2 \operatorname{sign} x_2\\
\dot x_2 &= -2\operatorname{sign} x_1 -\operatorname{sign} x_2.
\end{aligned}
Figure 3: Trajectory of the discontinuous system in the state space
Now, how fast does the solution approach the origin?
Let’s use the 1-norm \|\bm x\|_1 = |x_1| + |x_2| to measure how far the trajectory is from the origin. We then ask:
\frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = ?
We avoid the trouble with the nonsmoothness of the absolute value by considering each quadrant separately. Let's start in the first (upper right) quadrant, where x_1>0 and x_2>0, so that |x_1| = x_1 and |x_2| = x_2, and therefore
\frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = \dot x_1 + \dot x_2 = 1 - 3 = -2.
The situation is identical in the other quadrants (and the derivative is, of course, undefined on the axes).
The conclusion is that the trajectory hits the origin in finite time: for, say, x_1(0) = 1 and x_2(0) = 1, it reaches the origin at t=(|x_1(0)|+|x_2(0)|)/2 = 1, but only after an infinite number of revolutions around the origin…
How will a standard algorithm for numerical simulation handle this? Let's have a look at that.
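As a point of comparison, here is a minimal sketch (Python with NumPy; the step size `h` and horizon `T` are our own choices, not prescribed by the text) of what a naive forward-Euler discretization does to this system: it evaluates the sign at the current state and steps explicitly, which works far from the origin but ends up chattering in a small neighborhood of it instead of settling there.

```python
import numpy as np

def f(x):
    """Right-hand side of the discontinuous system."""
    s1, s2 = np.sign(x[0]), np.sign(x[1])
    return np.array([-s1 + 2.0 * s2, -2.0 * s1 - s2])

# Naive forward Euler: evaluate sign() at the current state, then step.
h, T = 0.01, 2.0
x = np.array([1.0, 1.0])
norms = []
for _ in range(int(T / h)):
    x = x + h * f(x)
    norms.append(abs(x[0]) + abs(x[1]))

# The exact solution reaches the origin at t = 1; the Euler iterates
# instead keep jumping around it with an amplitude on the order of h.
print(norms[-1])
```

Refining h shrinks the chattering band but never removes it, which is precisely what motivates the implicit time-stepping formulation that follows.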
Instead of solving the above nonlinear equations with discontinuities directly, we discretize in time with a step h>0 and introduce new variables u_1 and u_2 as the outputs of the \operatorname{sign} functions:
\begin{aligned}
{\color{blue} x_{1,k+1}} &= x_{1,k} + h (-{\color{blue}u_{1}} + 2 {\color{blue}u_{2}})\\
{\color{blue} x_{2,k+1}} &= x_{2,k} + h (-2{\color{blue}u_{1}} - {\color{blue}u_{2}}).
\end{aligned}
But now we have to enforce the relation between \bm u and \bm x_{k+1}. Recall that the standard definition of the \operatorname{sign} function is
\operatorname{sign}(x) = \begin{cases}
1 & x>0\\
0 & x=0\\
-1 & x<0,
\end{cases}
but we change the definition to a set-valued function
\begin{cases}
\operatorname{sign}(x) = 1 & x>0\\
\operatorname{sign}(x) \in [-1,1] & x=0\\
\operatorname{sign}(x) = -1 & x<0.
\end{cases}
Accordingly, we impose the relationship between \bm u and \bm x_{k+1} (note that it is the new state that enters, which makes the scheme implicit):
\begin{cases}
u_1 = 1 & x_{1,k+1}>0\\
u_1 \in [-1,1] & x_{1,k+1}=0\\
u_1 = -1 & x_{1,k+1}<0,
\end{cases}
and
\begin{cases}
u_2 = 1 & x_{2,k+1}>0\\
u_2 \in [-1,1] & x_{2,k+1}=0\\
u_2 = -1 & x_{2,k+1}<0.
\end{cases}
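The branch conditions above are easy to encode as a sanity check. The following small Python predicate (our own helper, not part of any library) tests whether a pair (u, x) is consistent with the set-valued sign:

```python
def sign_consistent(u, x, tol=1e-12):
    """True iff u lies in the set-valued sign of x:
    u = 1 for x > 0, u = -1 for x < 0, u in [-1, 1] for x = 0."""
    if x > tol:
        return abs(u - 1.0) <= tol
    if x < -tol:
        return abs(u + 1.0) <= tol
    return -1.0 - tol <= u <= 1.0 + tol

# Any u in [-1, 1] is admissible at x = 0; elsewhere u is pinned to +/-1.
print(sign_consistent(1.0, 2.5))    # True
print(sign_consistent(0.3, 0.0))    # True
print(sign_consistent(0.3, -1.0))   # False
```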
But these are the mixed complementarity constraints we have defined previously!
\boxed{
\begin{aligned}
\begin{bmatrix}
{\color{blue} x_{1,k+1}}\\
{\color{blue} x_{2,k+1}}
\end{bmatrix}
&=
\begin{bmatrix}
x_{1,k}\\
x_{2,k}
\end{bmatrix} + h
\begin{bmatrix}
-1 & 2 \\
-2 & -1
\end{bmatrix}
\begin{bmatrix}
{\color{blue}u_{1}}\\
{\color{blue}u_{2}}
\end{bmatrix}\\
-1 \leq {\color{blue} u_1} \leq 1 \quad &\bot \quad -{\color{blue}x_{1,k+1}}\\
-1 \leq {\color{blue} u_2} \leq 1 \quad &\bot \quad -{\color{blue}x_{2,k+1}}.
\end{aligned}
}
9 possible combinations
There are now 9 possible combinations of the branches for u_1 and u_2. Let's explore one of them: x_{1,k+1} = x_{2,k+1} = 0, with u_1 \in [-1,1] and u_2 \in [-1,1].
What does the set of states from which the next state is the origin look like?
\begin{aligned}
-\begin{bmatrix}
-1 & 2 \\
-2 & -1
\end{bmatrix}^{-1}
\begin{bmatrix}
x_{1,k}\\
x_{2,k}
\end{bmatrix}
&= h
\begin{bmatrix}
{\color{blue}u_{1}}\\
{\color{blue}u_{2}}
\end{bmatrix}\\
-1 \leq {\color{blue} u_1} \leq 1, \quad -1 &\leq {\color{blue} u_2} \leq 1
\end{aligned}
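The nine branch combinations can also be enumerated directly, which yields a complete (if brute-force) one-step solver for the boxed scheme. The sketch below (NumPy; all names are our own) fixes u_i = ±1 or sets x_{i,k+1} = 0 according to each branch, solves the resulting linear system, and returns the first consistent combination:

```python
import itertools
import numpy as np

A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])

def mcp_step(xk, h, tol=1e-10):
    """One implicit time step: find u in [-1,1]^2 and x_next with
    x_next = xk + h*A@u, where each component follows one branch:
    u_i = -1 (then x_next_i <= 0), u_i = +1 (then x_next_i >= 0),
    or x_next_i = 0 (then u_i in [-1,1]). Brute force over 9 cases."""
    for branch in itertools.product(('lo', 'mid', 'hi'), repeat=2):
        fixed = {i: (-1.0 if b == 'lo' else 1.0)
                 for i, b in enumerate(branch) if b != 'mid'}
        mid = [i for i, b in enumerate(branch) if b == 'mid']
        u = np.zeros(2)
        for i, v in fixed.items():
            u[i] = v
        if mid:
            # Solve 0 = xk[mid] + h*(A @ u)[mid] for the free u's.
            rhs = -xk[mid] - h * A[np.ix_(mid, list(fixed))] @ u[list(fixed)]
            u[mid] = np.linalg.solve(h * A[np.ix_(mid, mid)], rhs)
        x_next = xk + h * A @ u
        # Verify that every component is consistent with its branch.
        ok = True
        for i, b in enumerate(branch):
            if b == 'mid':
                ok &= abs(x_next[i]) <= tol and -1 - tol <= u[i] <= 1 + tol
            elif b == 'lo':
                ok &= x_next[i] <= tol
            else:
                ok &= x_next[i] >= -tol
        if ok:
            return x_next, u
    raise RuntimeError("no consistent branch found")

# Far from the origin the scheme reproduces a plain Euler step ...
x1, u1 = mcp_step(np.array([1.0, 1.0]), 0.01)
# ... while states inside the parallelogram -h*A*[-1,1]^2 derived above
# are mapped exactly onto the origin in a single step, with no chattering.
x0, u0 = mcp_step(np.array([0.001, 0.001]), 0.01)
print(x1, x0)
```

For this 2-by-2 example exhaustive enumeration is perfectly adequate; for larger systems one would hand the boxed mixed complementarity problem to a dedicated solver instead of trying all 3^n branch combinations.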